Processing Engine



HOAA: Hybrid Overestimating Approximate Adder for Enhanced Performance Processing Engine

Kokane, Omkar, Sati, Prabhat, Lokhande, Mukul, Vishvakarma, Santosh Kumar

arXiv.org Artificial Intelligence

This paper presents the Hybrid Overestimating Approximate Adder (HOAA), designed to enhance performance in processing engines, with a specific focus on edge AI applications. A novel Plus One Adder design is proposed as an incremental adder in the ripple-carry adder (RCA) chain, incorporating a full adder with an excess 1 alongside inputs A, B, and Cin. The design approximates outputs to 2-bit values to reduce hardware complexity and improve resource efficiency. The Plus One Adder is integrated into a dynamically reconfigurable HOAA, allowing runtime interchangeability between accurate and approximate overestimation modes. The proposed design is demonstrated for multiple applications, such as two's-complement subtraction, round-to-even, and configurable activation functions, which are critical components of the processing engine. Compared with state-of-the-art designs, our approach shows a 21% improvement in area efficiency and a 33% reduction in power consumption, with minimal accuracy loss. Thus, the proposed HOAA could be a promising solution for resource-constrained environments, offering favorable trade-offs between hardware efficiency and computational accuracy.
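To make the accurate-versus-overestimate mode switch concrete, here is a minimal behavioral sketch. It is not the paper's Plus One Adder cell; the bit width, the number of approximated cells, and the "force the carry-out to 1" rule are all assumptions chosen only to show why such an approximation can never undershoot the exact sum.

```python
# Illustrative sketch only: NOT the paper's Plus One Adder.
# Models a ripple-carry adder with a runtime accurate/approximate switch,
# where the approximate mode overestimates (result >= exact sum).

def full_adder(a, b, cin):
    """Exact 1-bit full adder: returns (sum, carry_out)."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def rca(a_bits, b_bits, cin=0, approximate=False):
    """Ripple-carry adder over LSB-first bit lists.

    In approximate mode the lower cells force their carry-out to 1
    (an assumed rule), so any error only ever increases the result."""
    approx_cells = 2  # assumed number of approximated lower bits
    out, carry = [], cin
    for i, (a, b) in enumerate(zip(a_bits, b_bits)):
        s, cout = full_adder(a, b, carry)
        if approximate and i < approx_cells:
            cout = 1  # overestimating simplification
        out.append(s)
        carry = cout
    out.append(carry)
    return out

def to_bits(x, n):
    return [(x >> i) & 1 for i in range(n)]

def to_int(bits):
    return sum(b << i for i, b in enumerate(bits))

a, b, n = 13, 22, 8
exact = to_int(rca(to_bits(a, n), to_bits(b, n)))
approx = to_int(rca(to_bits(a, n), to_bits(b, n), approximate=True))
assert exact == a + b and approx >= exact
print(exact, approx)  # the approximate result never undershoots
```

Because forcing a carry-out to 1 can only add to the carry chain, and addition is monotone in its carry-in, every error introduced in this sketch is an overestimate, which is the behavior the abstract exploits for subtraction and rounding.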


New US Army method accelerates AI decision-making

#artificialintelligence

As U.S. Army DEVCOM Army Research Laboratory Public Affairs explains, some information is not captured in the compressed image; for example, the orange cone in the original frame does not retain its orange color in the compressed frame. The point, though, is that the information needed to maintain detection performance is preserved, while other information is discarded to reduce the image size. The compressed/reconstructed image is 48.5 kB compared to 2.5 MB for the original, only about 2% of the original size. Researchers from the U.S. Army Combat Capabilities Development Command (DEVCOM) Army Research Laboratory and university partners from the Internet of Battlefield Things Collaborative Research Alliance (IoBT CRA) developed a new solution to provide battlefield applications with much-needed machine intelligence, even when the local environment cannot support AI processing.
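For reference, the size ratio quoted above can be reproduced for any image with ordinary lossy compression. The sketch below is a baseline only, not the ARL/IoBT CRA task-aware method; the file name and JPEG quality setting are assumptions for illustration.

```python
# Baseline sketch: plain JPEG compression with Pillow, reporting the size
# ratio in the same form as the article (48.5 kB / 2.5 MB ~= 2%).
import io
import os

from PIL import Image

original_path = "frame.png"  # hypothetical input frame
original_size = os.path.getsize(original_path)

buffer = io.BytesIO()
Image.open(original_path).convert("RGB").save(buffer, format="JPEG", quality=20)
compressed_size = buffer.tell()

ratio = compressed_size / original_size
print(f"{compressed_size / 1024:.1f} kB vs {original_size / 1e6:.1f} MB "
      f"({ratio:.1%} of original)")
```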


Towards a Generic Multimodal Architecture for Batch and Streaming Big Data Integration

Yousfi, Siham, Rhanoui, Maryem, Chiadmi, Dalila

arXiv.org Artificial Intelligence

Big Data are rapidly produced from various heterogeneous data sources. They are of different types (text, image, video, or audio) and have different levels of reliability and completeness. One of the most interesting architectures for dealing with the large amount of data emerging at high velocity is the lambda architecture. It combines two different processing layers, namely the batch and speed layers, each providing specific views of the data while ensuring robust, fast, and scalable data processing. However, most papers dealing with the lambda architecture focus on a single type of data, generally produced by a single data source. Moreover, the layers of the architecture are implemented independently or, at best, are combined to perform basic processing without assessing either data reliability or completeness. Therefore, inspired by the lambda architecture, we propose in this paper a generic multimodal architecture that combines batch and streaming processing in order to build a complete, global, and accurate insight in near real time, based on the knowledge extracted from multiple heterogeneous Big Data sources. Our architecture uses batch processing to analyze data structures and contents, build the learning models, and calculate the reliability index of the involved sources, while the streaming layer uses the models built by the batch layer to immediately process incoming data and rapidly provide results. We validate our architecture in the context of urban traffic management systems for congestion detection.
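The batch/speed split described here can be sketched in a few lines. The following is an assumed, toy rendering of the idea, not the paper's implementation: the batch layer learns a simple congestion model and a per-source reliability index from historical records, and the speed layer reuses both to score incoming records immediately.

```python
# Toy lambda-style pipeline: batch layer builds model + reliability index,
# speed layer applies them to each incoming record in near real time.
from collections import defaultdict
from statistics import mean

def batch_layer(history):
    """history: list of dicts {source, speed_kmh, congested (ground truth)}."""
    threshold = mean(r["speed_kmh"] for r in history if r["congested"])
    correct, total = defaultdict(int), defaultdict(int)
    for r in history:
        predicted = r["speed_kmh"] <= threshold
        total[r["source"]] += 1
        correct[r["source"]] += int(predicted == r["congested"])
    reliability = {s: correct[s] / total[s] for s in total}
    return {"threshold": threshold, "reliability": reliability}

def speed_layer(record, model):
    """Score one incoming record, weighted by its source's reliability."""
    congested = record["speed_kmh"] <= model["threshold"]
    confidence = model["reliability"].get(record["source"], 0.5)
    return {"congested": congested, "confidence": confidence}

history = [
    {"source": "loop_sensor", "speed_kmh": 12, "congested": True},
    {"source": "loop_sensor", "speed_kmh": 55, "congested": False},
    {"source": "tweets", "speed_kmh": 20, "congested": False},
    {"source": "tweets", "speed_kmh": 15, "congested": True},
]
model = batch_layer(history)
print(speed_layer({"source": "loop_sensor", "speed_kmh": 10}, model))
```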


ESTemd: A Distributed Processing Framework for Environmental Monitoring based on Apache Kafka Streaming Engine

Akanbi, Adeyinka

arXiv.org Artificial Intelligence

Distributed networks and real-time systems are becoming the most important components of the new computing era, the Internet of Things (IoT), with huge data streams and data sets generated by sensors and by existing legacy systems. The data generated offer the ability to measure, infer, and understand environmental indicators, from delicate ecologies and natural resources to urban environments. This can be achieved through the analysis of heterogeneous data sources (structured and unstructured). In this paper, we propose a distributed framework, the Event STream Processing Engine for the Environmental Monitoring Domain (ESTemd), for applying stream processing to heterogeneous environmental data. Our work demonstrates the useful role big data techniques can play in environmental decision support, early warning, and forecasting systems. The proposed framework addresses the challenges of data heterogeneity across systems and of real-time processing of huge environmental datasets through a publish/subscribe approach over a unified data pipeline built on Apache Kafka for real-time analytics.
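A unified publish/subscribe pipeline of this kind can be sketched with the kafka-python client. This is a hedged illustration in the spirit of the framework, not ESTemd's code; the broker address, topic name, and record fields are assumptions.

```python
# Sketch of a unified Kafka publish/subscribe pipeline for sensor readings.
import json

from kafka import KafkaProducer, KafkaConsumer  # pip install kafka-python

BROKER = "localhost:9092"
TOPIC = "env-readings"  # hypothetical unified topic

# Producer side: heterogeneous sensors publish JSON records to one topic.
producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda record: json.dumps(record).encode("utf-8"),
)
producer.send(TOPIC, {"sensor": "river_gauge_7", "metric": "level_m", "value": 2.31})
producer.flush()

# Consumer side: an analytics service subscribes and reacts in real time.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    value_deserializer=lambda payload: json.loads(payload.decode("utf-8")),
    auto_offset_reset="earliest",
)
for message in consumer:
    reading = message.value
    if reading["metric"] == "level_m" and reading["value"] > 2.0:
        print("early warning: high water level from", reading["sensor"])
```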


Real-Time Machine Learning

#artificialintelligence

Imagine this scenario: you have an app that uses machine learning, and you want the app to learn from your users' data in real time. That means that as new user data is generated, your app can make predictions and train on the incoming data stream to improve itself automatically. How would you go about building this? Take some time to study the chart; it's an example of this pipeline. Text data is streamed in real time to a model using a software product called Apache Kafka.
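The "predict, then train on the new batch" loop can be shown with scikit-learn's incremental API. This is an assumed sketch rather than the post's pipeline; in a real system the mini-batches would come from a Kafka consumer instead of the synthetic generator used here.

```python
# Online learning sketch: serve predictions and update the model as batches
# arrive from a (here simulated) data stream.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()
classes = np.array([0, 1])

def stream_of_batches(n_batches=50, batch_size=32):
    """Stand-in for an incoming stream of (features, labels) mini-batches."""
    for _ in range(n_batches):
        X = rng.normal(size=(batch_size, 4))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic labeling rule
        yield X, y

for X_batch, y_batch in stream_of_batches():
    # Predict first (serve the request), then learn from the new batch.
    if hasattr(model, "coef_"):
        _ = model.predict(X_batch)
    model.partial_fit(X_batch, y_batch, classes=classes)

print("trained on stream; final weights:", model.coef_.round(2))
```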


Report: Latency Issues Hamper Digital Business Advances - RTInsights

#artificialintelligence

To address latency challenges, organizations are adopting stream processing engines, in-memory computing databases, and, soon, 5G wireless services. The rate at which transactions are processed has always been of critical importance in the enterprise. However, as organizations look to digitize business processes, latency is rapidly becoming a bigger hurdle to overcome. A survey of 351 IT decision makers across five vertical industries, conducted by Hazelcast, a provider of an in-memory computing platform, finds that well over half of organizations (58%) are now measuring performance in milliseconds and microseconds rather than seconds. To put that in perspective, the report notes that the average blink of an eye takes about 300 milliseconds.


Dev Kit Weekly: BeagleBone AI

#artificialintelligence

This week we jump right into a descendant of one of the original open hardware maker-pro development kits, now given a facelift for artificial intelligence applications: the BeagleBone AI. The BeagleBone AI is a performance-packed platform based on Texas Instruments' Sitara AM5729 SoC, which integrates so much heterogeneous compute power that it's tucked away underneath a heatsink on the board: dual TI C66x floating-point DSPs, four Embedded Vision Engines, a dual-core PowerVR 3D GPU, a Vivante 2D graphics accelerator, dual Arm Cortex-A15 cores, dual Arm Cortex-M4 cores, an H.264 video encode/decode subsystem, and two programmable real-time units (PRUs). Beyond just being a long list of processing engines, all of that diverse compute makes the BeagleBone AI a perfect match for a wide range of neural networking applications. For instance, high-speed video interfaces on the board can be used to pipe in video streams through the encode/decode subsystem and pass them on to a CNN running on one of the DSPs or the GPU. Meanwhile, code or control logic running on one of the Arm cores or PRUs could trigger an action.
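A generic outline of that capture-infer-act loop is sketched below. It does not use TI's TIDL or OpenCL offload APIs; `run_cnn` and `trigger_action` are hypothetical stand-ins for inference on the DSPs/GPU and control logic on an Arm core or PRU.

```python
# Generic capture -> CNN -> action loop, assumed for illustration only.
import cv2  # pip install opencv-python

def run_cnn(frame):
    """Placeholder inference: report 'person' if the frame is bright enough."""
    return ["person"] if frame.mean() > 100 else []

def trigger_action(labels):
    """Stand-in for control logic on an Arm core or PRU (e.g., toggle a GPIO)."""
    print("detected:", labels)

capture = cv2.VideoCapture(0)  # first camera; on the board this could be a CSI feed
try:
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        small = cv2.resize(frame, (224, 224))  # typical CNN input size
        labels = run_cnn(small)
        if labels:
            trigger_action(labels)
finally:
    capture.release()
```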


MEDICI How Insurers Are Applying Machine Learning

#artificialintelligence

Just like financial institutions, insurers are no strangers to leveraging advanced technologies in various aspects of the business. Some of the practical applications of machine learning in the insurance industry include managing broker business, optimizing direct marketing, understanding quote conversion, computing optimal pricing, detecting fraud, claims triage, predicting litigation, targeting inspections and audits, forecasting claims, retaining customers, and, finally, recalibrating prices. Extensive research by Satadru Sengupta, General Manager & Data Scientist, Insurance at DataRobot, explores particular ways machine learning can impact operational efficiency. Let's take a closer look at some interesting examples and partnerships. Insurance executives need accurate loss predictions so that they can set reserves appropriately.


A Decade Later, Apache Spark Still Going Strong

#artificialintelligence

Don't look now but Apache Spark is about to turn 10 years old. The open source project began quietly at UC Berkeley in 2009 before emerging as an open source project in 2010. For the past five years, Spark has been on an absolute tear, becoming one of the most widely used technologies in big data and AI. Let's take a look at Spark's remarkable run up to this point, and see where it might be headed next. Apache Spark is best known as the in-memory replacement for MapReduce, the disk-based computational engine at the heart of early Hadoop clusters.